Can Boosting with SVM as Weak Learners Help?
Abstract
Object recognition in images involves identifying objects under partial occlusion, viewpoint changes, varying illumination, and cluttered backgrounds. Recent work in object recognition uses machine learning techniques such as SVM-KNN, Local Ensemble Kernel Learning, and Multiple Kernel Learning. In this paper, we want to utilize SVMs as weak learners in AdaBoost. Experiments are done with classifiers such as nearest neighbor, k-nearest neighbor, support vector machines, local learning (SVM-KNN), and AdaBoost. The models use Scale-Invariant Feature Transform (SIFT) descriptors and Pyramid Histogram of Oriented Gradients (PHOG) descriptors. AdaBoost is trained with a set of weak classifiers, each an SVM whose kernel is built from a distance function on a different descriptor. Results show that AdaBoost with SVM weak learners outperforms the other methods on the object categorization dataset.

1 Survey of Recent Work

SVM-KNN takes its motivation from the local learning of Bottou and Vapnik: it uses k-nearest neighbors to select local training points and runs an SVM on those points to classify the object. The main problem is the time taken for classification. Local ensemble kernel learning was proposed to combine kernels locally. Multiple kernel learning for object recognition, which uses PHOG feature vectors and shape descriptors, gives state-of-the-art accuracy on Caltech-101; this method tries to learn the discriminative power versus invariance trade-off.

1.1 Local Learning and SVM-KNN

Vapnik and Bottou [1] proposed a local learning algorithm for optical character recognition. The algorithm is simple: for each test pattern, select a few training patterns in the vicinity of the test pattern, train a neural network on only these examples, and apply the resulting network to the test pattern. They showed in 1996 that this simple algorithm improves optical character recognition accuracy over other methods. Zhang and Malik et al. use the same idea for object recognition. The naive SVM-KNN algorithm they propose is, for each query:
• Compute the K nearest neighbors of the query point.
• If all K neighbors have the same label, label the query and exit; otherwise compute the pairwise distances between the K neighbors.
• Convert the distance matrix into a kernel matrix and apply a multi-class SVM.
• Use the resulting classifier to label the query.
This algorithm is slow, so they proposed another version that first applies a cheap distance measure to select candidate examples and only then uses a costly distance measure on those points. The algorithm becomes:
• Find a collection of neighbors using the cheap distance measure.
• Compute the costly distance measure on these examples and select K of them.
• Compute the pairwise distances between these K points.
• Apply DAGSVM on the resulting kernel matrix for training.
They reported that this algorithm, with some small modifications to the distance measure used, achieves approximately 60% accuracy on the Caltech dataset. A code sketch of this two-stage procedure follows.
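As a minimal illustration, the Python sketch below assumes precomputed feature arrays and generic `cheap_dist`/`costly_dist` callables (hypothetical names, not the authors'); it uses an exp(-gamma*d) distance-to-kernel conversion and scikit-learn's one-vs-one SVC in place of DAGSVM, so it approximates rather than reproduces the surveyed method.

```python
import numpy as np
from sklearn.svm import SVC

def svm_knn_predict(query, X_train, y_train, cheap_dist, costly_dist,
                    n_candidates=100, K=10, gamma=1.0):
    """Two-stage SVM-KNN sketch: shortlist with a cheap distance,
    refine with a costly one, then train a local SVM on a kernel
    built from the costly distances."""
    # Stage 1: shortlist candidates with the cheap distance.
    d_cheap = np.array([cheap_dist(query, x) for x in X_train])
    cand = np.argsort(d_cheap)[:n_candidates]

    # Stage 2: rank the shortlist with the costly distance, keep K.
    d_costly = np.array([costly_dist(query, X_train[i]) for i in cand])
    order = np.argsort(d_costly)[:K]
    idx = cand[order]
    X_loc, y_loc = X_train[idx], y_train[idx]

    # If all K neighbors agree, no SVM is needed.
    if len(set(y_loc)) == 1:
        return y_loc[0]

    # Pairwise costly distances among the K neighbors -> kernel matrix.
    # exp(-gamma * d) is one common conversion; it is not guaranteed
    # positive semidefinite for arbitrary distance functions.
    D = np.array([[costly_dist(a, b) for b in X_loc] for a in X_loc])
    G = np.exp(-gamma * D)

    # Multi-class SVM on the precomputed local kernel (one-vs-one here,
    # where the paper uses DAGSVM).
    clf = SVC(kernel="precomputed").fit(G, y_loc)
    g_query = np.exp(-gamma * d_costly[order]).reshape(1, -1)
    return clf.predict(g_query)[0]
```

The early exit in step 2 is what makes the method fast in practice: most queries fall in a neighborhood with a single label, and the local SVM is only trained for the ambiguous ones.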
1.2 Local Ensemble Kernel Learning

This paper [2] proposes a model to learn kernels locally. Suppose there are $C$ classes and a training set $S = \{(x_i, y_i)\}_{i=1}^{l}$ with $y_i \in \{1, 2, \ldots, C\}$. Suppose each sample $x_i$ has $M$ feature representations, and each feature descriptor $r$ ($1 \le r \le M$) has a distance function $d_r$. Kernels $K_1, K_2, \ldots, K_M$ are formed by applying the $d_r$ to the $x_i$, and the aim is to combine these kernels. Applying standard kernel learning here would learn a good kernel globally; instead, the authors combine the kernels locally around each sample.

Let the neighborhood of a sample $x_i$ be defined by the weight vector $w_i = [w_{i,1}, \ldots, w_{i,l}]$, where
$$w_{i,j} = \frac{1}{M}\left(w^{1}_{i,j} + \cdots + w^{M}_{i,j}\right), \qquad w^{r}_{i,j} = \exp(-d_r(x_i, x_j)).$$
The local target kernel of $x_i$ is defined by
$$G_i(p, q) = w_{i,p}\, w_{i,q}\, G(p, q).$$
Kernel alignment is then performed between the $M$ kernels and the local target kernel to obtain the optimized kernel. Experiments on the Caltech-101 dataset give approximately 61% accuracy. Most object categorization datasets are manually labelled through a crowdsourcing approach [3, 4]; Caltech-101 also contains a face detection dataset [4].

1.3 Improving Local Learning by Exploring the Effects of Ranking

Here, for each labeled sample, the proposed technique learns a local distance function and ranks its neighbors at the same time: for a sample $I$, it learns a distance function that ranks the neighbors of $I$ so as to improve object classification. Since a closer sample generally has a higher probability of being included in a k-NN scheme, one would expect the learned distance function to be affected more by the samples near $I$, as measured by the distance function itself. That is, if we put the samples into a neighbor list ordered by increasing distance, the top portion of the list should be more influential in learning. P-Norm Push is used here because it tends to pay more attention to the top portion of a ranked list. In this method there is again a set of distance functions available, and the aim is to learn a weighted combination of these distance functions for each sample. The weights are learned by a maximum-margin optimization. Then, given a test sample, the training samples are ranked by the learned distance function, and a label is assigned to the test sample according to this ranking. The reported results on the Caltech dataset are around 70%.

1.4 Multiple Kernel Learning

The goal of kernel learning is to learn a kernel that is optimal for the specified task. In multiple kernel learning, suppose there are $n$ data points $(x_i, y_i)$, where $x_i \in \mathcal{X}$ for some input space and $y_i \in \{-1, +1\}$. Suppose there are $m$ kernel matrices $K_j \in \mathbb{R}^{n \times n}$. The problem is to learn the best linear combination of these kernels, that is, to learn the weights $\eta_j$ in
$$K = \sum_{j=1}^{m} \eta_j K_j.$$
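To make the combination concrete, here is a small sketch: `combine_kernels` implements $K = \sum_j \eta_j K_j$, and `alignment_weights` chooses $\eta$ by the kernel-target-alignment heuristic, a simple stand-in for the margin-based optimization used in the MKL literature rather than the surveyed method itself. Both function names are ours.

```python
import numpy as np

def combine_kernels(kernels, eta):
    """Weighted sum K = sum_j eta_j * K_j of precomputed kernel matrices."""
    K = np.zeros_like(kernels[0], dtype=float)
    for K_j, e in zip(kernels, eta):
        K += e * K_j
    return K

def alignment_weights(kernels, y):
    """Heuristic eta via kernel-target alignment:
    A(K, yy^T) = <K, yy^T>_F / (||K||_F * ||yy^T||_F),
    where yy^T is the ideal kernel for labels in {-1, +1}."""
    yyT = np.outer(y, y)
    scores = []
    for K_j in kernels:
        a = np.sum(K_j * yyT) / (np.linalg.norm(K_j) * np.linalg.norm(yyT))
        scores.append(max(a, 0.0))  # keep weights non-negative
    eta = np.array(scores)
    if eta.sum() == 0:
        return np.full(len(kernels), 1.0 / len(kernels))
    return eta / eta.sum()
```

The resulting combined matrix can be fed to any kernel machine that accepts a precomputed kernel, such as the local SVM in the earlier sketch.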
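Finally, the abstract proposes training AdaBoost with SVM weak learners, each built on a different descriptor's kernel. The sketch below is one plausible reading of that setup under simplifying assumptions (binary labels in {-1, +1}, precomputed per-descriptor training kernels); the pool-selection scheme and all names are our assumptions, not the paper's specification.

```python
import numpy as np
from sklearn.svm import SVC

def boost_svm_pool(kernels, y, T=10):
    """Discrete AdaBoost over SVM weak learners, one candidate SVM per
    precomputed descriptor kernel (assumption: binary y in {-1, +1}).

    Each round trains every candidate on the current sample weights,
    keeps the one with the lowest weighted error, and reweights."""
    n = len(y)
    w = np.full(n, 1.0 / n)   # sample weights
    ensemble = []             # (alpha, kernel_index, fitted SVC)

    for _ in range(T):
        best = None
        for j, K in enumerate(kernels):
            clf = SVC(kernel="precomputed").fit(K, y, sample_weight=w)
            pred = clf.predict(K)
            err = np.sum(w * (pred != y))
            if best is None or err < best[0]:
                best = (err, j, clf, pred)
        err, j, clf, pred = best
        if err >= 0.5:        # weak learner no better than chance: stop
            break
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        ensemble.append((alpha, j, clf))
        w *= np.exp(-alpha * y * pred)  # upweight the mistakes
        w /= w.sum()
    return ensemble

def boosted_predict(ensemble, test_kernels):
    """test_kernels[j]: (n_test, n_train) kernel for descriptor j."""
    score = sum(a * clf.decision_function(test_kernels[j])
                for a, j, clf in ensemble)
    return np.sign(score)
```

Because each weak learner sees only one descriptor's kernel, the learned alphas act as a per-descriptor weighting, which is what lets the boosted ensemble play the same role as the kernel combinations surveyed above.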
متن کاملذخیره در منابع من
با ذخیره ی این منبع در منابع من، دسترسی به آن را برای استفاده های بعدی آسان تر کنید
برای دانلود متن کامل این مقاله و بیش از 32 میلیون مقاله دیگر ابتدا ثبت نام کنید
ثبت ناماگر عضو سایت هستید لطفا وارد حساب کاربری خود شوید
Journal: CoRR abs/1604.05242, 2016